
    A Multiple Hypothesis Testing Approach to Low-Complexity Subspace Unmixing

    Subspace-based signal processing traditionally focuses on problems involving a few subspaces. Recently, problems have emerged in several application areas that involve a significantly larger number of subspaces relative to the ambient dimension. In such settings it becomes imperative to first identify a small set of active subspaces that contribute to the observation before further processing can be carried out. This problem of identifying a small set of active subspaces, out of a huge collection, from a single (noisy) observation in the ambient space is termed subspace unmixing. This paper formally poses the subspace unmixing problem under the parsimonious subspace-sum (PS3) model, discusses connections of the PS3 model to problems in wireless communications, hyperspectral imaging, high-dimensional statistics, and compressed sensing, and proposes a low-complexity algorithm, termed marginal subspace detection (MSD), for subspace unmixing. The MSD algorithm turns subspace unmixing under the PS3 model into a multiple hypothesis testing (MHT) problem, and the accompanying analysis shows how to control the family-wise error rate of this MHT problem at any level α ∈ [0, 1] under two random signal generation models. Other highlights of the analysis of the MSD algorithm include: (i) it is applicable to an arbitrary collection of subspaces on the Grassmann manifold; (ii) it relies on properties of the collection of subspaces that are computable in polynomial time; and (iii) it allows the number of active subspaces to scale linearly with the ambient dimension. Finally, numerical results are presented to better understand the performance of the MSD algorithm.
    Comment: Submitted for journal publication; 33 pages, 14 figures
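    The abstract does not spell out the detector, but its marginal, per-subspace structure suggests a simple implementation: project the observation onto each candidate subspace and compare the captured energy against a threshold. The NumPy sketch below is illustrative only; the function name, the orthonormal-basis representation of the subspaces, and the per-subspace thresholds taus are assumptions, and the paper's FWER-controlling choice of thresholds is not reproduced here.

```python
import numpy as np

def marginal_subspace_detection(y, bases, taus):
    """Illustrative sketch of a marginal subspace detector.

    y     : (D,) noisy observation in the ambient space.
    bases : list of (D, d_k) arrays, each with orthonormal columns
            spanning one candidate subspace.
    taus  : per-subspace thresholds; in the paper these would be
            chosen to control the family-wise error rate at level alpha.

    Returns the indices of the subspaces declared active.
    """
    active = []
    for k, U in enumerate(bases):
        # Marginal test statistic: energy of y captured by subspace k.
        stat = np.linalg.norm(U.T @ y) ** 2
        # One binary hypothesis test per candidate subspace.
        if stat > taus[k]:
            active.append(k)
    return active
```

    Each subspace is tested in isolation, which is what keeps the complexity low: the cost is one projection per candidate subspace, regardless of how many subspaces are actually active.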

    Model Selection: Two Fundamental Measures of Coherence and Their Algorithmic Significance

    The problem of model selection arises in a number of contexts, such as compressed sensing, subset selection in linear regression, estimation of structures in graphical models, and signal denoising. This paper generalizes the notion of incoherence in the existing literature on model selection and introduces two fundamental measures of coherence among the columns of a design matrix: the worst-case coherence and the average coherence. In particular, it uses these two measures to provide an in-depth analysis of a simple one-step thresholding (OST) algorithm for model selection. One of the key insights offered by the analysis is that OST is feasible for model selection as long as the design matrix obeys an easily verifiable property. In addition, the paper characterizes the model-selection performance of OST in terms of the worst-case coherence, μ, and establishes that OST performs near-optimally in the low signal-to-noise ratio regime for N × C design matrices with μ = O(N^{-1/2}). Finally, in contrast to some of the existing literature on model selection, the analysis in the paper is nonasymptotic, does not require knowledge of the true model order, is applicable to generic (random or deterministic) design matrices, and neither requires submatrices of the design matrix to have full rank nor assumes a statistical prior on the values of the nonzero entries of the data vector.
    Comment: 5 pages; Accepted for Proc. 2010 IEEE International Symposium on Information Theory (ISIT 2010)
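    Both coherence measures and the OST algorithm itself are cheap to compute, which is the point of the "easily verifiable property" mentioned in the abstract. The sketch below uses the standard definitions of worst-case and average coherence from this line of work for a design matrix with unit-norm columns; the threshold lam and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def coherence_measures(X):
    """Worst-case and average coherence of an N x C design matrix X
    whose columns are assumed to have unit ell-2 norm."""
    C = X.shape[1]
    G = X.T @ X - np.eye(C)          # Gram matrix with the diagonal zeroed
    mu = np.max(np.abs(G))           # worst-case coherence
    nu = np.max(np.abs(G.sum(axis=1))) / (C - 1)  # average coherence
    return mu, nu

def one_step_thresholding(X, y, lam):
    """Illustrative one-step thresholding (OST): correlate y with every
    column of X and keep the columns whose correlation magnitude
    exceeds the threshold lam."""
    z = X.T @ y                      # a single matrix-vector product
    return np.flatnonzero(np.abs(z) > lam)
```

    The entire model-selection step is one matrix-vector product followed by an elementwise comparison, which is why OST remains attractive when the design matrix is very large.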